229 research outputs found

    Estimating Optimal Thinning and Rotation for Mixed-Species Timber Stands Using a Random Search Algorithm

    The problem of optimal density over time for even-aged, mixed-species stands is formulated as a nonlinear-integer programming problem with numbers of trees cut by species and diameter class as decision variables. The model is formulated using a stand-table projection growth model to predict mixed-species growth and stand structure. Optimal thinning and final harvest age are estimated simultaneously using heuristic random search algorithms. For sample problems with two species, random search methods provide near-optimal cutting strategies with very little computer time or memory. Optimal solutions are estimated for problems with eight initial species/diameter class groups, projected for up to three discrete growth periods. Such solution methods merit further study for evaluating complex stand- and forest-level decisions. Forest Sci. 31:303-315.
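
    The abstract does not reproduce the stand-table growth equations, so the following Python sketch only illustrates the shape of the heuristic random search it describes: integer cutting decisions per species/diameter class and growth period, scored by an objective. The growth projection and value function here are invented placeholders, not the authors' model.

        import random

        # Illustrative heuristic random search over integer cutting decisions.
        # stand_value() is a placeholder for the stand-table projection and
        # valuation used in the paper; it is NOT the authors' growth model.
        def stand_value(cut_schedule, initial_stand):
            value, stand = 0.0, list(initial_stand)
            for period, cuts in enumerate(cut_schedule, start=1):
                for k, n_cut in enumerate(cuts):
                    n_cut = min(n_cut, stand[k])
                    value += n_cut * (k + 1) * period   # stand-in for volume times price
                    stand[k] -= n_cut
                stand = [int(n * 1.1) for n in stand]   # stand-in for one growth period
            return value

        def random_search(initial_stand, periods, iterations=10_000, seed=0):
            rng = random.Random(seed)
            classes, best, best_value = len(initial_stand), None, float("-inf")
            for _ in range(iterations):
                candidate = [[rng.randint(0, max(initial_stand)) for _ in range(classes)]
                             for _ in range(periods)]
                v = stand_value(candidate, initial_stand)
                if v > best_value:
                    best, best_value = candidate, v
            return best, best_value

        # Eight species/diameter-class groups and three growth periods, as in the abstract.
        schedule, value = random_search([120, 90, 60, 40, 30, 20, 10, 5], periods=3)
        print(value)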

    Packing While Traveling: Mixed Integer Programming for a Class of Nonlinear Knapsack Problems

    Packing and vehicle routing problems play an important role in the area of supply chain management. In this paper, we introduce a non-linear knapsack problem that occurs when packing items along a fixed route and taking into account travel time. We investigate constrained and unconstrained versions of the problem and show that both are NP-hard. In order to solve the problems, we provide a pre-processing scheme as well as exact and approximate mixed integer programming (MIP) solutions. Our experimental results show the effectiveness of the MIP solutions and in particular point out that the approximate MIP approach often leads to near-optimal results in far less computation time than the exact approach.
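
    The paper's exact objective is not given in the abstract; the sketch below evaluates the kind of nonlinear objective that arises in this setting, using the common travelling-thief-style assumption that vehicle speed decreases linearly with the weight carried. All parameter names (v_max, v_min, rent, capacity) are assumptions for illustration.

        # Evaluate profit minus weight-dependent travel cost for a fixed route
        # and a fixed packing decision (assumed linear speed-weight model).
        def packing_objective(route_legs, items, chosen, capacity,
                              v_max=1.0, v_min=0.1, rent=1.0):
            """route_legs[i]: distance of leg i; items[i]: (profit, weight) pairs
            available before leg i; chosen[i]: booleans selecting those items."""
            nu = (v_max - v_min) / capacity      # speed lost per unit of weight
            profit = weight = travel_time = 0.0
            for i, distance in enumerate(route_legs):
                for (p, w), take in zip(items[i], chosen[i]):
                    if take:
                        profit += p
                        weight += w
                if weight > capacity:
                    return float("-inf")         # packing exceeds capacity
                travel_time += distance / (v_max - nu * weight)
            return profit - rent * travel_time

        legs = [5.0, 5.0]
        items = [[(10.0, 3.0)], [(8.0, 2.0)]]
        print(packing_objective(legs, items, chosen=[[True], [False]], capacity=10.0))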

    A reformulation–linearization–convexification algorithm for optimal correction of an inconsistent system of linear constraints

    In this paper, an algorithm is introduced to find an optimal solution for an optimization problem that arises in total least squares with inequality constraints, and in the correction of infeasible linear systems of inequalities. The stated problem is a nonconvex program with a special structure that allows the use of a reformulation–linearization–convexification technique for its solution. A branch-and-bound method for finding a global optimum for this problem is introduced based on this technique. Some computational experiments are included to highlight the efficacy of the proposed methodology.
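
    The abstract does not spell out the relaxation, but the reformulation–linearization step it relies on can be illustrated on a single bounded bilinear term: multiplying the bound factors pairwise and replacing the product xy by a new variable z yields the familiar McCormick/RLT inequalities (generic notation, not the paper's):

        \begin{aligned}
        x &\in [x^L, x^U], \qquad y \in [y^L, y^U], \qquad z := xy,\\
        z &\ge x^L y + y^L x - x^L y^L, \qquad z \ge x^U y + y^U x - x^U y^U,\\
        z &\le x^U y + y^L x - x^U y^L, \qquad z \le x^L y + y^U x - x^L y^U.
        \end{aligned}

    Branching on the variable bounds then tightens such convex relaxations until a global optimum is certified.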

    Optimal Uncertainty Quantification

    We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call \emph{Optimal Uncertainty Quantification} (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions they have finite-dimensional reductions. As an application, we develop \emph{Optimal Concentration Inequalities} (OCI) of Hoeffding and McDiarmid type. Surprisingly, these results show that uncertainties in input parameters, which propagate to output uncertainties in the classical sensitivity analysis paradigm, may fail to do so if the transfer functions (or probability distributions) are imperfectly known. We show how, for hierarchical structures, this phenomenon may lead to the non-propagation of uncertainties or information across scales. In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact and on the seismic safety assessment of truss structures, suggesting the feasibility of the framework for important complex systems. The introduction of this paper provides both an overview of the paper and a self-contained mini-tutorial about basic concepts and issues of UQ. Comment: 90 pages. Accepted for publication in SIAM Review (Expository Research Papers). See SIAM Review for higher quality figures.
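
    For orientation, the "optimal bounds" referred to above are values of optimization problems posed over all scenarios compatible with the available information; schematically (not in the paper's exact notation), the quantity of interest and the classical McDiarmid bound that the Optimal Concentration Inequalities sharpen look like:

        \mathcal{U}(\mathcal{A}) := \sup_{(f,\mu)\in\mathcal{A}} \mu\big[f(X)\ge a\big],
        \qquad
        \mathbb{P}\big[F(X)-\mathbb{E}[F(X)]\ge a\big] \;\le\; \exp\!\left(-\frac{2a^2}{\sum_{i=1}^m D_i^2}\right),

    where $\mathcal{A}$ is the set of admissible (response function, input distribution) pairs and the $D_i$ are componentwise oscillations (bounded differences) of $F$.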

    Computation with Polynomial Equations and Inequalities arising in Combinatorial Optimization

    The purpose of this note is to survey a methodology to solve systems of polynomial equations and inequalities. The techniques we discuss use the algebra of multivariate polynomials with coefficients over a field to create large-scale linear algebra or semidefinite programming relaxations of many kinds of feasibility or optimization questions. We are particularly interested in problems arising in combinatorial optimization. Comment: 28 pages, survey paper.
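
    A standard example of this kind of encoding (a common one in this literature, though the abstract does not list its examples) is graph 3-colorability written as a polynomial system over the complex numbers:

        \begin{aligned}
        x_v^3 - 1 &= 0 && \text{for every vertex } v,\\
        x_u^2 + x_u x_v + x_v^2 &= 0 && \text{for every edge } \{u, v\}.
        \end{aligned}

    Each vertex variable must be a cube root of unity, and the edge equation vanishes exactly when adjacent vertices receive distinct roots, so the system is feasible if and only if the graph is 3-colorable; Nullstellensatz-based linear algebra or semidefinite relaxations of such systems can then certify infeasibility.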

    An FPTAS for optimizing a class of low-rank functions over a polytope

    We present a fully polynomial time approximation scheme (FPTAS) for optimizing a very general class of non-linear functions of low rank over a polytope. Our approximation scheme relies on constructing an approximate Pareto-optimal front of the linear functions which constitute the given low-rank function. In contrast to existing results in the literature, our approximation scheme does not require the assumption of quasi-concavity on the objective function. For the special case of quasi-concave function minimization, we give an alternative FPTAS, which always returns a solution that is an extreme point of the polytope. Our technique can also be used to obtain an FPTAS for combinatorial optimization problems with non-linear objective functions, for example when the objective is a product of a fixed number of linear functions. We also show that it is not possible to approximate the minimum of a general concave function over the unit hypercube to within any factor, unless P = NP. We prove this by showing a similar hardness of approximation result for supermodular function minimization, a result that may be of independent interest.
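
    As a concrete (hypothetical) instance of the approximate-Pareto-front idea for the rank-two case f(x) = l1(x) * l2(x) with positive linear functions, one can sweep a geometric grid of budgets on l1 and minimize l2 by linear programming at each budget; the grid constants and helper names below are illustrative, not the paper's construction.

        import numpy as np
        from scipy.optimize import linprog

        def approx_min_product(c1, c2, A, b, eps=0.1, lo=1e-3, hi=1e3):
            """Approximately minimise (c1.x)(c2.x) over {x >= 0 : A x <= b},
            assuming both linear functions are positive on the feasible set."""
            best_val, best_x, t = np.inf, None, lo
            while t <= hi * (1 + eps):
                # Minimise l2 subject to the polytope and the budget l1(x) <= t.
                res = linprog(c2, A_ub=np.vstack([A, c1]), b_ub=np.append(b, t),
                              bounds=[(0, None)] * len(c1), method="highs")
                if res.success:
                    val = float(c1 @ res.x) * float(c2 @ res.x)
                    if val < best_val:
                        best_val, best_x = val, res.x
                t *= 1 + eps
            return best_val, best_x

        # Toy polytope: x1 + x2 >= 1 and x <= 2 componentwise.
        A = np.array([[-1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
        b = np.array([-1.0, 2.0, 2.0])
        print(approx_min_product(np.array([1.0, 2.0]), np.array([2.0, 1.0]), A, b))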

    Efficient computation of the outer hull of a discrete path

    We present here a linear time and space algorithm for computing the outer hull of any discrete path encoded by its Freeman chain code. The basic data structure uses an enriched version of the data structure introduced by Brlek, Koskas and Provençal: using quadtrees for representing points in the discrete plane ℤ×ℤ with neighborhood links, deciding path intersection is achievable in linear time and space. By combining this with the well-known wall follower algorithm for traversing mazes, we obtain the desired result in two passes, resulting in an overall linear time and space algorithm. As a byproduct, the convex hull is obtained as well.
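
    The quadtree and wall-follower machinery is what makes the computation linear and is not reproduced here, but the input representation is easy to illustrate: a 4-connected Freeman chain code is a word over {0, 1, 2, 3} whose letters are unit steps on ℤ×ℤ (the direction convention below is one common choice, not necessarily the paper's).

        # Decode a 4-connected Freeman chain code into the lattice points it visits.
        STEPS = {"0": (1, 0), "1": (0, 1), "2": (-1, 0), "3": (0, -1)}  # E, N, W, S

        def decode_chain_code(code, start=(0, 0)):
            x, y = start
            points = [(x, y)]
            for symbol in code:
                dx, dy = STEPS[symbol]
                x, y = x + dx, y + dy
                points.append((x, y))
            return points

        # A unit square traversed counter-clockwise.
        print(decode_chain_code("0123"))  # [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]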

    Towards Machine Wald

    The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed \emph{by humans} because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to \emph{think} as \emph{humans} are able to do when faced with uncertainty is challenging in several major ways: (1) finding optimal statistical models has yet to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations, and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite-dimensional, whereas calculus on a computer is necessarily discrete and finite. With this purpose in mind, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification and Information-Based Complexity. Comment: 37 pages.